Civil Society


Levers of Power in the Field of AI

Mackenzie, Tammy, Punj, Sukriti, Perez, Natalie, Bhaduri, Sreyoshi, Radeljic, Branislav

arXiv.org Artificial Intelligence

This paper examines how decision-makers in academia, government, business, and civil society navigate questions of power in implementations of artificial intelligence (AI). The study explores how individuals experience and exercise "levers of power," presented as social mechanisms that shape institutional responses to technological change. The study reports on responses to personalized questionnaires designed to gather insight into each decision-maker's institutional purview, based on an institutional governance framework developed from the work of neo-institutionalists. Findings present the anonymized, real responses and circumstances of respondents in the form of twelve fictional personas of high-level decision-makers from North America and Europe. These personas illustrate how personal agency, organizational logics, and institutional infrastructures may intersect in the governance of AI. The decision-makers' responses to the questionnaires then inform a discussion of the field-level personal power of decision-makers, methods of fostering institutional stability in times of change, and methods of influencing institutional change in the field of AI. The final section of the discussion presents a table of the dynamics of the levers of power in the field of AI for change-makers, along with five testable hypotheses for institutional and social movement researchers. In summary, this study provides insight into the means for policymakers within institutions, and their counterparts in civil society, to engage personally with AI governance.


The Trump Administration Is Coming for Nonprofits. They're Getting Ready

WIRED

As the Trump administration threatens them, liberal nonprofits have been quietly preparing to do everything from surrendering 501(c)(3) status to relocating outside the US. Within hours of the murder of conservative podcaster and activist Charlie Kirk, and in the absence of a suspect, high-profile figures on the right, from Vice President JD Vance to deputy White House chief of staff for policy Stephen Miller, already had a different culprit in mind: nonprofit organizations. On September 11, a day after Kirk's murder, US Representative Chip Roy, a Republican of Texas, sent a letter requesting the formation of a select committee on "the money, influence, and power behind the radical left's assault on America and the rule of law."


Civil Society in the Loop: Feedback-Driven Adaptation of (L)LM-Assisted Classification in an Open-Source Telegram Monitoring Tool

Pustet, Milena, Steffen, Elisabeth, Mihaljević, Helena, Stanjek, Grischa, Illies, Yannis

arXiv.org Artificial Intelligence

The role of civil society organizations (CSOs) in monitoring harmful online content is increasingly crucial, especially as platform providers reduce their investment in content moderation. AI tools can assist in detecting and monitoring harmful content at scale. However, few open-source tools offer seamless integration of AI models and social media monitoring infrastructures. Given their thematic expertise and contextual understanding of harmful content, CSOs should be active partners in co-developing technological tools, providing feedback, helping to improve models, and ensuring alignment with stakeholder needs and values, rather than passive 'consumers'. However, collaborations between the open-source community, academia, and civil society remain rare, and research on harmful content seldom translates into practical tools usable by civil society actors. This work in progress explores how CSOs can be meaningfully involved in an AI-assisted open-source tool for monitoring anti-democratic movements on Telegram, which we are currently developing in collaboration with CSO stakeholders.
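
The paper describes a feedback loop in which CSO reviewers correct the model's labels and those corrections are folded back into the classifier. As a rough illustration of that pattern, here is a minimal sketch using an incrementally trainable scikit-learn classifier; the seed data, labels, and the review_batch() helper are hypothetical, and this is not the authors' implementation, which pairs (L)LM-assisted classification with a Telegram monitoring infrastructure.

```python
# Minimal sketch of feedback-driven adaptation of a harmful-content classifier.
# Illustrative only: the seed data, labels, and review_batch() helper are
# hypothetical, not part of the tool described in the paper.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # stateless, so no re-fitting needed
model = SGDClassifier(loss="log_loss")            # supports incremental partial_fit

# Bootstrap on a small expert-labeled seed set (1 = flag for review, 0 = benign).
seed_texts = ["call to mobilise against the election", "ordinary channel chatter"]
seed_labels = [1, 0]
model.partial_fit(vectorizer.transform(seed_texts), seed_labels, classes=[0, 1])

def review_batch(messages, corrected_labels):
    """Fold CSO reviewers' corrected labels back into the running model."""
    model.partial_fit(vectorizer.transform(messages), corrected_labels)

# Each monitoring cycle, the model proposes labels for new messages, CSO
# partners confirm or correct them, and review_batch() adapts the classifier.
```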


Addressing the Regulatory Gap: Moving Towards an EU AI Audit Ecosystem Beyond the AIA by Including Civil Society

Hartmann, David, de Pereira, José Renato Laranjeira, Streitbörger, Chiara, Berendt, Bettina

arXiv.org Artificial Intelligence

The European legislature has proposed the Digital Services Act (DSA) and the Artificial Intelligence Act (AIA) to regulate platforms and Artificial Intelligence (AI) products. We review to what extent third-party audits are part of both laws and to what extent access to models and data is provided. By considering the value of third-party audits and third-party data access in an audit ecosystem, we identify a regulatory gap: the Artificial Intelligence Act does not provide access to data for researchers and civil society. Our contributions to the literature include: (1) defining an AI audit ecosystem that incorporates compliance and oversight; (2) highlighting a regulatory gap within the DSA and AIA regulatory framework that prevents the establishment of an AI audit ecosystem; (3) emphasizing that third-party audits by research and civil society must be part of that ecosystem, and demanding that the AIA include data and model access for certain AI products. We call for the DSA to provide NGOs and investigative journalists with data access to platforms through delegated acts, and for adaptations and amendments of the AIA to provide third-party audits and data and model access, at least for high-risk systems, to close the regulatory gap. Regulations modeled after European Union AI regulations should enable data access and third-party audits, fostering an AI audit ecosystem that promotes compliance and oversight mechanisms.


New AI video tools increase worries of deepfakes ahead of elections

Al Jazeera

The video that OpenAI released to unveil its new text-to-video tool, Sora, has to be seen to be believed. The demonstration reportedly prompted movie producer Tyler Perry to pause an $800m studio investment: tools like Sora promise to translate a user's vision into realistic moving images from a simple text prompt, the logic goes, making studios obsolete. Others worry that artificial intelligence (AI) like this could be exploited by those with darker imaginations. Malicious actors could use these services to create highly realistic deepfakes, confusing or misleading voters during an election or simply causing chaos by seeding divisive rumours.


Downing Street trying to agree statement about AI risks with world leaders

The Guardian

Rishi Sunak's advisers are trying to thrash out an agreement among world leaders on a statement warning about the risks of artificial intelligence as they finalise the agenda for the AI safety summit next month. Downing Street officials have been touring the world talking to their counterparts from China to the EU and the US as they work to agree on the words to be used in a communique at the two-day conference. But they are unlikely to agree on a new international organisation to scrutinise cutting-edge AI, despite interest from the UK in giving the government's AI taskforce a global role. Sunak's AI summit will produce a communique on the risks of AI models, provide an update on White House-brokered safety guidelines and end with "like-minded" countries debating how national security agencies can scrutinise the most dangerous versions of the technology. The possibility of some form of international cooperation on cutting-edge AI that could pose a threat to human life will also be discussed on the final day of the summit, held on 1 and 2 November at Bletchley Park, according to a draft agenda seen by the Guardian.


An inside look at Congress's first AI regulation forum

MIT Technology Review

The AI Insight Forums were announced a few months ago by Senate Majority Leader Chuck Schumer as part of his "SAFE Innovation" initiative, which is really a set of principles for AI legislation in the United States. The invite list was heavily skewed toward Big Tech execs, including CEOs of AI companies, though a few civil society and AI ethics researchers were included too. Coverage of the meeting thus far has put a particular emphasis on the reportedly unanimous agreement about the need for AI regulation, and on issues raised by Elon Musk and others about the "civilizational risks" created by AI. (This tracker from Tech Policy Press is pretty handy if you want to know more.) But to really dig below the surface, I caught up with one of the other attendees, Inioluwa Deborah Raji, who gave me an inside look at how the first meeting went, the pernicious myths she needed to debunk, and where disagreements could be felt in the room. Raji is a researcher at the University of California, Berkeley, and a fellow at Mozilla.


Introducing LLaMA: A foundational, 65-billion-parameter language model

#artificialintelligence

As part of Meta's commitment to open science, today we are publicly releasing LLaMA (Large Language Model Meta AI), a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as LLaMA enable others in the research community who don't have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like LLaMA is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others' work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. We are making LLaMA available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a LLaMA model card that details how we built the model in keeping with our approach to Responsible AI practices.
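
For researchers who obtain the weights, a LLaMA-class checkpoint can be loaded for study with standard open-source tooling. A minimal sketch using the Hugging Face transformers library follows; the model path is a placeholder, since Meta grants access to the original LLaMA weights on a case-by-case basis.

```python
# Illustrative sketch of loading a LLaMA-class checkpoint for research use.
# "path/to/llama-7b" is a placeholder for a locally downloaded checkpoint;
# it is not an official distribution channel for the weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/llama-7b"  # hypothetical local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "Smaller foundation models let researchers"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```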


Saying No to Surveillance State

#artificialintelligence

Recently, an RTI filed by the Internet Freedom Foundation (IFF) revealed that the Delhi Police is using facial recognition technology (FRT) to nab rioters in the capital city. This has caused an uproar, as many members of civil society raised concerns and called the Delhi Police's use of FRT 'unethical' in the absence of a Data Protection Act in the country. Their argument is that national security should not come at the cost of privacy. Technology such as FRT has been controversial, and authorities leveraging such tech is definitely a concern. The RTI filed by IFF revealed that the Delhi Police's procurement of the FRT was authorised as per a 2018 direction of the Delhi High Court in Sadhan Haldar v NCT of Delhi.


Computer scientist aims to protect people in age of artificial intelligence

#artificialintelligence

As data-driven technologies transform the world and artificial intelligence raises questions about bias, privacy and transparency, Suresh Venkatasubramanian is offering his expertise to help create guardrails to ensure that technologies are developed and deployed responsibly. "We need to protect the American people and make sure that technology is used in ways that reinforce our highest values," said Venkatasubramanian, a professor of computer science and data science at Brown University. On the heels of a recently concluded 15-month appointment as an advisor to the White House Office of Science and Technology Policy, Venkatasubramanian returned to Washington, D.C., on Tuesday, Oct. 4, for the unveiling of "A Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People," during a ceremony at the White House. Venkatasubramanian said the blueprint represents the culmination of 14 months of research and collaboration led by the Office of Science and Technology Policy with partners across the federal government, academia, civil society, the private sector and communities around the country. That collaboration informed the development of the first-ever national guidance focused on the use and deployment of automated technologies that have the potential to impact people's rights, opportunities and access to services.